Twitter, Others Slip on Removing Hate Speech, EU Review Says 

A view of Twitter headquarters in San Francisco, California, USA, 21 November 2022. (EPA)

Twitter took longer to review hateful content and removed less of it in 2022 compared with the previous year, according to European Union data released Thursday. 

The EU figures were published as part of an annual evaluation of online platforms' compliance with the 27-nation bloc's code of conduct on disinformation. 

Twitter wasn't alone — most other tech companies signed up to the voluntary code also scored worse. But the figures could foreshadow trouble for Twitter in complying with the EU's tough new online rules after owner Elon Musk fired many of the platform's 7,500 full-time workers and an untold number of contractors responsible for content moderation and other crucial tasks. 

The EU report found Twitter assessed just over half of the notifications it received about illegal hate speech within 24 hours, down from 82% in 2021. Facebook, Instagram and YouTube also took longer, while TikTok was the only one to improve. 

The share of flagged hate speech Twitter removed slipped to 45.4% from 49.8% the year before. Removal rates at the other platforms also declined, except at YouTube, where the rate rose sharply. 

Twitter didn't respond to a request for comment. Emails to several staff on the company's European communications team bounced back as undeliverable. 

Musk's $44 billion acquisition of Twitter last month fanned widespread concern that purveyors of lies and misinformation would be allowed to flourish on the site. The billionaire Tesla CEO, who has frequently expressed his belief that Twitter had become too restrictive, has been reinstating suspended accounts, including former President Donald Trump's. 

Twitter faces more scrutiny in Europe by the middle of next year, when new EU rules aimed at protecting internet users’ online safety will start applying to the biggest online platforms. Violations could result in fines of up to 6% of a company's annual global revenue. 

France's online regulator Arcom said it received a reply from Twitter after writing to the company earlier this week to express concern about the effect that staff departures would have on Twitter's “ability [to] maintain a safe environment for its users.” 

Arcom also asked the company to confirm it can meet its “legal obligations” in fighting online hate speech and that it is committed to implementing the new EU online rules. The regulator said it will “study their response,” without giving more details. 

Tech companies that signed up to the EU's disinformation code agree to commit to measures aimed at reducing disinformation and file regular reports on whether they’re living up to their promises, though there’s little in the way of punishment. 



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing a growing number of Chinese groups use its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as the use of its models to create and debug malware or to generate fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, generating X posts, such as "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?".

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.